

Visual Concept-Metaconcept Learning

Neural Information Processing Systems

Humans reason with concepts and metaconcepts: we recognize red and blue from visual input; we also understand that they are colors, i.e., red is an instance of color. In this paper, we propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs. The key is to exploit the bidirectional connection between visual concepts and metaconcepts. Visual representations provide grounding cues for predicting relations between unseen pairs of concepts. Knowing that red and blue are instances of color, we generalize to the fact that green is also an instance of color, since they all categorize the hue of objects. Meanwhile, knowledge about metaconcepts empowers visual concept learning from limited, noisy, and even biased data. From just a few examples of purple cubes we can understand a new color, purple, which resembles the hue of the cubes rather than their shape.


Reviews: Visual Concept-Metaconcept Learning

Neural Information Processing Systems

Overall, this is a really interesting idea: incorporating concrete visual concepts and more abstract metaconcepts in a joint space, and using the learning of one to guide the other. There are some questions below, mostly about training implementation details, that I would like cleared up. 1. Why not use pretrained word embeddings for the GRU model? One issue here is that the object proposal generator was trained on ImageNet, meaning it almost certainly had access to visual information about the held-out concepts in Ctest. The GRU baseline, even with significantly less training data, outperforms for instance-of.


Reviews: Visual Concept-Metaconcept Learning

Neural Information Processing Systems

The authors present a joint framework for acquiring both visual concepts of objects and linguistic metaconcepts describing relationships between those visual concepts in visual reasoning tasks (images paired with question-answer pairs), and demonstrate the approach on synthetic and real-world image datasets. Part of the novelty of this work lies in incorporating the metaconcepts into visual concept learning, and the proposed model somewhat mirrors human learning. Reviewers would like to see more careful and thorough experimental validation, and are concerned that the metaconcepts are not realistic.


Visual Concept-Metaconcept Learning

Han, Chi, Mao, Jiayuan, Gan, Chuang, Tenenbaum, Josh, Wu, Jiajun

Neural Information Processing Systems



Visual Concept-Metaconcept Learning

Han, Chi, Mao, Jiayuan, Gan, Chuang, Tenenbaum, Joshua B., Wu, Jiajun

arXiv.org Machine Learning

Humans reason with concepts and metaconcepts: we recognize red and green from visual input; we also understand that they describe the same property of objects (i.e., the color). In this paper, we propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs. The key is to exploit the bidirectional connection between visual concepts and metaconcepts. Visual representations provide grounding cues for predicting relations between unseen pairs of concepts. Knowing that red and green describe the same property of objects, we generalize to the fact that cube and sphere also describe the same property of objects, since they both categorize the shape of objects. Meanwhile, knowledge about metaconcepts empowers visual concept learning from limited, noisy, and even biased data. From just a few examples of purple cubes we can understand a new color, purple, which resembles the hue of the cubes rather than their shape. Evaluation on both synthetic and real-world datasets validates our claims.
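The core idea above — concepts embedded in a shared space, with metaconcept relations (such as "describe the same property") predicted from pairs of concept embeddings — can be illustrated with a minimal sketch. This is not the paper's actual VCML architecture; the embeddings, the symmetric bilinear scorer, and all names and dimensions here are invented for illustration.

```python
# Illustrative sketch only: concepts live in a shared embedding space,
# and a metaconcept relation over a pair of concepts is scored from
# their embeddings. In VCML these representations would be learned
# jointly from images and question-answer pairs; here they are random.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # assumed toy embedding dimension

# Toy (untrained) concept embeddings.
concepts = {
    name: rng.normal(size=DIM)
    for name in ["red", "green", "blue", "cube", "sphere"]
}

def same_property_score(a, b, w):
    """Score whether concepts a and b describe the same property,
    using a symmetrized bilinear form (an assumed stand-in for a
    learned metaconcept operator)."""
    va, vb = concepts[a], concepts[b]
    return float(va @ w @ vb + vb @ w @ va) / 2.0

w = np.eye(DIM)  # placeholder parameters; a real model would learn these
score = same_property_score("red", "green", w)
```

In a trained model, supervision from metaconcept questions (e.g., "do red and green describe the same property?") would shape both `w` and the concept embeddings, so that visually grounded concepts and metaconcept relations inform each other.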

arXiv:2002.01464